COMPUTER SCIENCE DEPARTMENT
STANFORD UNIVERSITY

April 21, 1972


Dr. Donald Michie
Systems and Information Science
313 Link Hall
Syracuse University
Syracuse, New York 13210

Dear Donald,

The line I would take at Serbelloni is as follows:

1. It is probably possible to make computer programs more intelligent than humans in every way.

2. We do not know how long it will take to do this, because some fundamental discoveries have yet to be made. Most likely, the fundamental problems have yet to be identified. A stroke of genius like that of Newton or Einstein may be required. The time to reach this goal may be five hundred years, but it may be only five years.

3. The science fiction vision of robots and men as equal partners is quite unlikely, because a machine of equal intelligence to a man will become much more intelligent with just the next generation of computers.

4. The science fiction vision of robots developing human-like goals of freedom or power is also unlikely, because it would require deliberate effort to program computers to simulate the complexity of human motivation. It is easier and better to program them to tell us how to achieve goals subject to constraints.

5. Adequate safeguards can be devised so that we will understand the consequences of the actions we consider undertaking on computer advice. In fact, with AI, we will understand the consequences of alternate policies much better than we understand them now. Therefore, the science fiction vision of people giving computers control of decisions, and the actions taken having startling consequences, is also unlikely.

6. The largest unknown about the effect of AI is its effect on human motivation, but this is part of the larger problem of the effect on human motivation of achieving full knowledge of the universe and power limited only by physical law. Is there an infinite amount to be discovered? Will humanity be motivated to occupy the whole galaxy, etc.?

7. A lot depends on whether AI is achieved by humans understanding the nature of intelligence and programming computers in accordance with this knowledge. This is the approach I advocate, and the approach I think most likely to succeed.

8. An alternate approach is to build a self-improving system that improves itself without the improvements being understood by humans. This approach may be dangerous if done without precautions and understanding. A machine programmed to induce its designer to press the reward button may find it easier to do this by controlling him than by catering to his whims. The present efforts along this line, however, are too primitive to be dangerous.

9. Policy in the use of human-level or better AI can wait till we know more about it. When we understand it well enough to produce it, we will know much more about the consequences of the various policies. Moreover, if we had an intelligent system, one of the first questions to ask it would be the consequences of the various policies that might be adopted about its use.

It might be suggested that humans should not use AI in making scientific discoveries, just as mountain climbers don't consider landing on top of the mountain by helicopter. It seems to me that if science is to become a game, better later than sooner. We are still quite a ways from getting any help at all from AI in scientific work, and there are too many urgent technological problems that humanity must solve to give up any method that might be of help.

10. Besides the consequences of human-level AI, one should also consider the consequences of low-level developments based on our present knowledge. It seems to me that these are not qualitatively different from other advances in productivity. In particular, they do not have as much social consequence as will the development of the home console and the information systems associated with it.

11. In view of all this, I am somewhat doubtful about the utility of the Serbelloni conference. We may just waste each other's time.

Sincerely yours,



John McCarthy


JMC:barbara